
Conversation

winskuo-quic (Collaborator) commented Aug 5, 2025

Summary

  • Support 16-bit KV IO for the runner (it can run with either an 8-bit or a 16-bit KV cache).
  • Add a README for the script that runs Qwen2.5 0.5B.
  • Improve the PPL score for Qwen2.5 0.5B from 18 to 12 (see the quantization sketch after this list).
  • Fix a BC CI bug.
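
For context, here is a minimal sketch of the PT2E post-training quantization flow that the 16a8w config and the PPL numbers below refer to. This is not the code from llama.py; the QnnQuantizer import path and its configuration are assumptions, and the real script wires this up (plus calibration and lowering) internally.

```python
# Hedged sketch of a 16a8w PT2E PTQ flow, NOT the exact code in llama.py.
# The QnnQuantizer import path and default config are assumptions.
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from executorch.backends.qualcomm.quantizer.quantizer import QnnQuantizer  # assumed path


def quantize_16a8w(model: torch.nn.Module, example_inputs, calibrate):
    quantizer = QnnQuantizer()  # llama.py configures 16-bit activations / 8-bit weights here
    exported = torch.export.export_for_training(model, example_inputs).module()
    prepared = prepare_pt2e(exported, quantizer)  # insert observers
    calibrate(prepared)                           # run representative prompts for calibration
    return convert_pt2e(prepared)                 # fake-quantized graph used for the offline PPL check
```

The "aligns with prepare_pt2e and convert_pt2e" note in the stats presumably means the on-device perplexity matches what this fake-quantized graph produces offline.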

Sample Script
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s $DEVICE -m SM8750 --prompt "What is 1+1?" --temperature 0 --model_mode kv --max_seq_len 1024 --ptq 16a8w --decoder_model qwen2_5 --eval_perplexity --tasks wikitext --limit 1 --artifact ./16bit_qwen_1024 --enable_masked_softmax --r3

Stats with QNN2.37.0 on SM8750

Accuracy: 12 PPL (aligns with the prepare_pt2e and convert_pt2e results)
Token rate: ~130 tok/sec, depending on seq_len.
[Screenshot: on-device run statistics]
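
For reference, a rough sketch of how a wikitext perplexity figure like 12 is derived (PPL = exp of the mean next-token negative log-likelihood). The actual --eval_perplexity path in llama.py uses its own evaluation harness; the HF-style model output below is an assumption for illustration.

```python
# Illustrative perplexity computation, assuming an HF-style model that returns .logits.
import math

import torch
import torch.nn.functional as F


@torch.no_grad()
def perplexity(model, token_ids: torch.Tensor) -> float:
    # token_ids: [1, seq_len] slice of tokenized wikitext
    logits = model(token_ids).logits                  # [1, seq_len, vocab]
    nll = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predict token t+1 from the prefix
        token_ids[:, 1:].reshape(-1),
    )
    return math.exp(nll.item())                       # PPL = exp(mean NLL)
```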

Test plan

Added E2E test to test_qnn_delegate.py

pytorch-bot bot commented Aug 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13127

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit f7954d3 with merge base c8a0706:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Aug 5, 2025
github-actions bot commented Aug 5, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

winskuo-quic force-pushed the dev1/winskuo/qwen_16_bit branch from 2ee4964 to f9efbc5 on August 5, 2025, 13:47
winskuo-quic marked this pull request as ready for review on August 5, 2025, 14:56
winskuo-quic requested a review from cccclai as a code owner on August 5, 2025, 14:56
winskuo-quic marked this pull request as draft on August 5, 2025, 16:06
winskuo-quic force-pushed the dev1/winskuo/qwen_16_bit branch from 645a69c to 7bed64b on August 6, 2025, 05:35
winskuo-quic marked this pull request as ready for review on August 6, 2025, 07:19
facebook-github-bot (Contributor) commented:
@cccclai has imported this pull request. If you are a Meta employee, you can view this in D79958806.

cccclai (Contributor) commented Aug 10, 2025

There seems to be a merge conflict.

winskuo-quic force-pushed the dev1/winskuo/qwen_16_bit branch from 7bed64b to f7954d3 on August 11, 2025, 03:20
facebook-github-bot (Contributor) commented:
@cccclai has imported this pull request. If you are a Meta employee, you can view this in D79958806.

haowhsu-quic (Collaborator) commented:
Hi @cccclai, I wonder if we could get this merged? It would be great to have it in, and we can quickly follow up with a PR for the statistics.

facebook-github-bot (Contributor) commented:
@cccclai has imported this pull request. If you are a Meta employee, you can view this in D79958806.

cccclai (Contributor) commented Aug 13, 2025

Yeah, trying to merge but ran into a merge conflict; checking again.

cccclai (Contributor) left a review comment:

Thanks for making the change!

facebook-github-bot (Contributor) commented:
@cccclai has imported this pull request. If you are a Meta employee, you can view this in D79958806.

cccclai merged commit d5cd4f3 into pytorch:main on Aug 13, 2025 (101 of 104 checks passed).
agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request Aug 26, 2025
